50 research outputs found
A sensor fusion layer to cope with reduced visibility in SLAM
Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. Despite tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in those environments because they usually rely on optical sensors providing dense range data, such as laser range finders, stereo vision, LIDAR, and RGB-D cameras, whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced visibility by proposing a sensor fusion layer that exploits the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to achieve resilience in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely arrays of sonars, which are usually available on robotic platforms but raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy logic-based method, are presented and discussed, corresponding to different stages of the research. Experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for coping with reduced visibility in the SLAM process, significantly decreasing its impact on the mapping and localization results.
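To make the fusion idea concrete, here is a minimal Python sketch, not the authors' implementation: the function name `fuse_ranges`, the weighting shape, and the 5 m maximum range are illustrative assumptions. The intuition from the abstract is that a laser beam cut short by smoke disagrees strongly with the overlapping sonar cone, so trust should shift toward the sonar as disagreement grows.

```python
def fuse_ranges(lrf_range, sonar_range, max_range=5.0):
    """Heuristic fusion of one LRF beam with the overlapping sonar cone.

    Illustrative sketch only: a large LRF/sonar disagreement, with the
    laser reading the shorter of the two, is taken as evidence that
    airborne particles (smoke, dust) truncated the laser measurement.
    """
    # Degree of disagreement, normalized to [0, 1].
    disagreement = min(abs(sonar_range - lrf_range) / max_range, 1.0)

    # Trust the laser when the readings agree; shift trust toward the
    # sonar as the laser reading becomes suspiciously short.
    if lrf_range < sonar_range:
        laser_weight = 1.0 - disagreement
    else:
        laser_weight = 1.0
    return laser_weight * lrf_range + (1.0 - laser_weight) * sonar_range
```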
MoDSeM: modular framework for distributed semantic mapping. 'Embedded intelligence: enabling & supporting RAS technologies'
This paper presents MoDSeM, a novel software framework for spatial perception supporting teams of robots. MoDSeM aims to provide a semantic mapping approach able to represent all spatial information perceived in autonomous missions involving teams of field robots, and to formalize the development of perception software, promoting reusable modules that can fit varied team constitutions. Preliminary experiments took place in simulation, using a 100×100×100 m simulated map to demonstrate the work-in-progress prototype's ability to receive, store, and retrieve spatial information. Results show the appropriateness of ROS and OpenVDB as back-ends for the prototype, which achieved promising performance in all aspects of the task and supports future developments.
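As a rough illustration of the storage pattern such a framework relies on, here is a plain-Python stand-in for an OpenVDB-style sparse voxel grid (the class and its fields are hypothetical, not MoDSeM's API): memory grows with the number of observed voxels rather than with the full 100×100×100 m volume.

```python
class SparseSemanticMap:
    """Toy stand-in for an OpenVDB-backed semantic voxel grid.

    Stores per-voxel semantic labels only where observations exist,
    so a large volume costs memory proportional to what was seen.
    """

    def __init__(self, voxel_size=0.5):
        self.voxel_size = voxel_size
        self.voxels = {}  # (i, j, k) -> {"label": str, "confidence": float}

    def _key(self, x, y, z):
        s = self.voxel_size
        return (int(x // s), int(y // s), int(z // s))

    def insert(self, x, y, z, label, confidence):
        # Keep the highest-confidence label seen for this voxel.
        key = self._key(x, y, z)
        current = self.voxels.get(key)
        if current is None or confidence > current["confidence"]:
            self.voxels[key] = {"label": label, "confidence": confidence}

    def query(self, x, y, z):
        return self.voxels.get(self._key(x, y, z))
```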
Reply to: Comments on “Particle Swarm Optimization with Fractional-Order Velocity”
We agree with the comments of Ling-Yun et al. [5] and of Zhang and Duan [2] about the typing error in equation (9) of the manuscript [8]. The correct formula was initially proposed in [6, 7]. The formula adopted in the algorithms discussed in our papers [1, 3, 4, 8] is, in fact, the following: …
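For reference, the fractional-order velocity update commonly cited in this line of work is the four-term Grünwald–Letnikov truncation of the α-order derivative; the reconstruction below follows the general literature rather than the reply itself, and should be read as an assumption:

```latex
v_{t+1} = \alpha\, v_{t}
        + \tfrac{1}{2}\alpha(1-\alpha)\, v_{t-1}
        + \tfrac{1}{6}\alpha(1-\alpha)(2-\alpha)\, v_{t-2}
        + \tfrac{1}{24}\alpha(1-\alpha)(2-\alpha)(3-\alpha)\, v_{t-3}
        + \varphi_{1}\,(b_{t} - x_{t}) + \varphi_{2}\,(g_{t} - x_{t})
```

where $v$ is the particle velocity, $x$ its position, $b$ and $g$ the local and global best positions, and $\varphi_{1}, \varphi_{2}$ the usual random weights.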
Chemotaxis Based Virtual Fence for Swarm Robots in Unbounded Environments
This paper presents a novel swarm robotics application of the chemotaxis behaviour observed in microorganisms. The approach is used to make exploring robots return to a work area around the swarm's nest within a boundless environment. We investigate the performance of our algorithm through extensive simulation studies and hardware validation. Results show that the chemotaxis approach is effective for keeping the swarm close to both stationary and moving nests. Comparing these results with the unrealistic case where a boundary wall keeps the swarm within a target search area shows that our chemotaxis approach produces competitive results.
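A minimal sketch of the run-and-tumble chemotaxis idea follows (all names and constants are illustrative assumptions, not the paper's algorithm): each robot compares a virtual concentration that decays with distance from the nest against its previous sample, and tumbles (picks a new random heading) more often when the concentration is falling, which statistically steers it back toward the nest.

```python
import math
import random

def concentration(pos, nest):
    # Virtual chemical field: highest at the nest, decaying with distance.
    d = math.hypot(pos[0] - nest[0], pos[1] - nest[1])
    return math.exp(-d / 10.0)  # 10 m decay scale, an illustrative choice

def chemotaxis_step(pos, heading, prev_conc, nest, speed=0.1):
    c = concentration(pos, nest)
    # Run-and-tumble: tumble rarely when climbing the gradient,
    # frequently when descending it (i.e. moving away from the nest).
    p_tumble = 0.05 if c >= prev_conc else 0.5
    if random.random() < p_tumble:
        heading = random.uniform(0.0, 2.0 * math.pi)
    pos = (pos[0] + speed * math.cos(heading),
           pos[1] + speed * math.sin(heading))
    return pos, heading, c
```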
Comparison of bio-inspired algorithms applied to the coordination of mobile robots considering the energy consumption
Many applications of autonomous mobile robots require exploring an unknown environment in search of static targets, without any a priori information about the environment topology or target locations. Targets in such rescue missions can be fires, mines, human victims, or dangerous materials that the robots have to handle. In these scenarios, some cooperation among the robots is required to accomplish the mission. This paper focuses on the application of different bio-inspired metaheuristics to the coordination of a swarm of mobile robots that have to explore an unknown area in order to rescue and handle, cooperatively, a set of distributed targets. The problem is formulated by first defining an optimization model and then considering two sub-problems: exploration and recruiting. First, the environment is incrementally explored by the robots using a modified version of ant colony optimization. Then, when a robot detects a target, a recruiting mechanism recruits a certain number of robots to deal with the found target together. For this latter purpose, we propose and compare three approaches based on three different bio-inspired algorithms (the Firefly Algorithm, Particle Swarm Optimization, and the Artificial Bee Colony algorithm). A computational study and extensive simulations assess the behavior of the proposed approaches and analyze their performance in terms of the total energy consumed by the robots to complete the mission. Simulation results indicate that the firefly-based strategy usually provides superior performance and can reduce energy waste, especially in complex scenarios.
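To make the firefly-based recruiting concrete, here is a minimal sketch of the standard Firefly Algorithm movement rule applied to recruiting (the detected target plays the role of the brightest firefly; the parameters β0, γ, α follow the usual formulation, and the setup is an illustrative assumption rather than the paper's exact scheme).

```python
import math
import random

def firefly_move(robot, target, beta0=1.0, gamma=0.1, alpha=0.05):
    """One Firefly Algorithm step: a robot is attracted toward a brighter
    source (here, the detected target), with attractiveness decaying in
    distance, plus a small random perturbation."""
    r2 = sum((t - x) ** 2 for x, t in zip(robot, target))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness beta0 * exp(-gamma r^2)
    return [x + beta * (t - x) + alpha * (random.random() - 0.5)
            for x, t in zip(robot, target)]

# Example: a recruited robot drifting toward a target at (5, 5).
pos = [0.0, 0.0]
for _ in range(100):
    pos = firefly_move(pos, [5.0, 5.0])
```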